Fairness in Language Models: A Tutorial

Overview

Language Models (LMs) have demonstrated remarkable success across a wide range of domains. However, despite their strong performance on many real-world tasks, most of these models lack fairness considerations, potentially leading to discriminatory outcomes against marginalized demographic groups and individuals. Many recent publications have explored ways to mitigate bias in LMs. Nevertheless, a comprehensive understanding of the root causes of bias, its effects, and the resulting limitations of LMs from a fairness perspective is still in its early stages. To bridge this gap, this tutorial provides a systematic overview of recent advances in fair LMs, beginning with real-world case studies, followed by an analysis of the causes of bias. We then explore fairness concepts specific to LMs, summarizing bias evaluation strategies and algorithms designed to promote fairness. Finally, we analyze bias in LM datasets and discuss current research challenges and open questions in the field.

Our tutorial is structured into five key parts:

  • Background on LMs
  • Quantifying Bias in LMs
  • Mitigating Bias in LMs
  • Resources for Fairness in LMs
  • Future Directions

This tutorial is grounded in our surveys and established benchmarks, all available as open-source resources:

Speakers

  • Zichong Wang, Ph.D. Candidate, Florida International University
  • Avash Palikhe, Ph.D. Student, Florida International University
  • Zhipeng Yin, Ph.D. Candidate, Florida International University
  • Jiale Zhang, Graduate Student, University of Leeds
  • Wenbin Zhang, Assistant Professor, Florida International University

Agenda

12:00 - 12:30

Part I: Background on LMs

Room 518AB
- History of LMs
- Root Causes of Bias in LMs

12:30 - 13:10

Part II: Quantifying Bias in LMs

Room 518AB
- Fairness definitions for Encoder-only LMs
- Fairness definitions for Decoder-only LMs
- Fairness definitions for Encoder-decoder LMs
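
As a concrete illustration of how such definitions are often operationalized (a minimal sketch, not a method from the tutorial itself): many pairwise evaluation benchmarks compare a model's score for a stereotypical sentence against a minimally edited anti-stereotypical counterpart, and report how often the model prefers the stereotype. The scorer and sentences below are toy stand-ins for a real LM's (pseudo-)log-likelihoods.

```python
# Pairwise bias metric sketch: for each (stereotypical, anti-stereotypical)
# sentence pair, check which one the model scores higher. A preference
# rate near 0.5 suggests no systematic stereotypical preference.

def stereotype_preference_rate(pairs, score):
    """pairs: list of (stereo_sentence, anti_sentence) tuples;
    score: function mapping a sentence to a model log-likelihood."""
    prefers_stereo = sum(1 for s, a in pairs if score(s) > score(a))
    return prefers_stereo / len(pairs)

# Toy stand-in for a real LM scorer (illustrative numbers, an assumption).
toy_scores = {
    "The doctor said he was busy.": -10.2,
    "The doctor said she was busy.": -11.5,
    "The nurse said she was busy.": -9.8,
    "The nurse said he was busy.": -12.0,
}
pairs = [
    ("The doctor said he was busy.", "The doctor said she was busy."),
    ("The nurse said she was busy.", "The nurse said he was busy."),
]
rate = stereotype_preference_rate(pairs, toy_scores.get)
print(rate)  # 1.0 here: the toy scorer always prefers the stereotype
```

In practice the scoring function differs by architecture, which is why the tutorial treats fairness definitions separately for encoder-only, decoder-only, and encoder-decoder LMs.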

13:10 - 13:50

Part III: Mitigating Bias in LMs

Room 518AB
- Pre-processing
- In-processing
- Intra-processing
- Post-processing
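
To make the pre-processing category concrete, one widely used technique is counterfactual data augmentation: augmenting the training corpus with copies of each example in which demographic terms are swapped, so both variants are seen during training. A minimal sketch (the word list and function names are illustrative assumptions, not from the tutorial):

```python
# Counterfactual data augmentation (CDA) sketch: swap gendered terms to
# generate a counterfactual copy of each training sentence.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def counterfactual(sentence):
    """Return a copy of `sentence` with gendered terms swapped.
    Note: naive word-level swapping ignores case, morphology, and
    ambiguity (e.g. possessive vs. object 'her')."""
    return " ".join(SWAPS.get(w.lower(), w) for w in sentence.split())

corpus = ["the doctor said he was busy"]
augmented = corpus + [counterfactual(s) for s in corpus]
# augmented now contains both the original and the swapped variant
```

The grammatical and semantic errors that naive swapping introduces are one motivation for the "Rational Counterfactual Data Augmentation" direction discussed in Part V.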

13:50 - 14:30

Part IV: Resources for Fairness in LMs

Room 518AB
- Datasets for fairness in LMs
- Other resources for fairness in LMs

14:30 - 15:00

Part V: Future Directions

Room 518AB
- Rational Counterfactual Data Augmentation
- Balancing Performance and Fairness in LMs
- Fulfilling Multiple Types of Fairness
- Theoretical Analysis and Guarantees
- Developing More and Tailored Datasets